
    Offline and Online Interactive Frameworks for MRI and CT Image Analysis in the Healthcare Domain : The Case of COVID-19, Brain Tumors and Pancreatic Tumors

    Medical imaging represents the organs, tissues and structures beneath the skin and bones, and provides information on normal anatomical structures against which abnormalities can be detected and diagnosed. In this thesis, tools and techniques are used to automate the analysis of medical images, emphasizing the detection of brain tumor anomalies from brain MRIs, Covid infections from lung CT images, and pancreatic tumors from pancreatic CT images. Image processing methods such as filtering and thresholding models, geometry models, graph models, region-based analysis, connected component analysis, machine learning models, and recent deep learning models are used. The following problems for medical images are considered in this research: abnormality detection, abnormal region segmentation, an interactive user interface that presents the detection and segmentation results while receiving feedback from healthcare professionals to improve the analysis procedure, and finally report generation. Complete interactive systems containing conventional models, machine learning, and deep learning methods for different types of medical abnormalities have been proposed and developed in this thesis. The experimental results show promising outcomes that have led to the incorporation of the methods into the proposed solutions, based on the observed performance metrics and their comparisons. Although separate systems have currently been developed for brain tumors, Covid and pancreatic cancer, their success shows promising potential for combining them into a generalized system that analyzes medical images of different types, collected from any organ, to detect any type of abnormality.
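    Two of the conventional techniques named above, thresholding and connected component analysis, can be sketched on a tiny 2-D "image". This is a minimal illustration, not the thesis pipeline; the grid values and threshold are invented.

```python
def threshold(image, t):
    """Binarize: 1 where intensity exceeds t, else 0."""
    return [[1 if v > t else 0 for v in row] for row in image]

def connected_components(mask):
    """Count 4-connected foreground regions via iterative flood fill."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1
                stack = [(r, c)]
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count

image = [
    [10, 90, 90, 10],
    [10, 90, 10, 10],
    [10, 10, 10, 80],
]
mask = threshold(image, 50)
print(connected_components(mask))  # two bright regions -> 2
```

    Each connected component would correspond to one candidate abnormal region handed to later analysis stages.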

    A Binary Tree based Approach for Time Based Page Ranking in Search Engines

    Search engines rank web pages according to different criteria: some use publication time, some use the time of the last update, and some check the currency of the page content. In this paper, a new algorithm is proposed that ranks web pages by combining the time of the web page with temporal information from its content, organized in a binary tree.
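    One way to read the binary-tree idea is to key each page by a timestamp in a binary search tree and emit pages newest-first via reverse in-order traversal. This is a hedged interpretation of the abstract, not the paper's exact algorithm; the page names and years are invented.

```python
class Node:
    """One web page keyed by its time value."""
    def __init__(self, time, page):
        self.time, self.page = time, page
        self.left = self.right = None

def insert(root, time, page):
    """Standard BST insert keyed on time."""
    if root is None:
        return Node(time, page)
    if time < root.time:
        root.left = insert(root.left, time, page)
    else:
        root.right = insert(root.right, time, page)
    return root

def rank_newest_first(root, out=None):
    """Reverse in-order traversal: most recent pages first."""
    if out is None:
        out = []
    if root:
        rank_newest_first(root.right, out)
        out.append(root.page)
        rank_newest_first(root.left, out)
    return out

root = None
for t, page in [(2019, "a.html"), (2023, "b.html"), (2021, "c.html")]:
    root = insert(root, t, page)
print(rank_newest_first(root))  # ['b.html', 'c.html', 'a.html']
```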

    A survey of machine learning-based methods for COVID-19 medical image analysis

    The ongoing COVID-19 pandemic caused by the SARS-CoV-2 virus has already resulted in 6.6 million deaths, with more than 637 million people infected, only 30 months after the first occurrences of the disease in December 2019. Hence, rapid and accurate detection and diagnosis of the disease is a first priority all over the world. Researchers have been working on various methods for COVID-19 detection, and as the disease infects the lungs, lung image analysis has become a popular research area for detecting its presence. Medical images from chest X-rays (CXR), computed tomography (CT), and lung ultrasound have been used by automated image analysis systems in artificial intelligence (AI)- and machine learning (ML)-based approaches. Various existing and novel ML, deep learning (DL), transfer learning (TL), and hybrid models have been applied to detecting and classifying COVID-19, segmenting infected regions, assessing severity, and tracking patient progress from medical images of COVID-19 patients. In this paper, a comprehensive review of recent approaches to COVID-19 image analysis is provided, surveying the contributions of existing research efforts, the available image datasets, and the performance metrics used in recent works. The challenges and future research scopes for advancing the fight against COVID-19 from the AI perspective are also discussed. The main objective of this paper is therefore to summarize the research done on COVID-19 detection and analysis from medical image datasets using ML, DL, and TL models, analyzing their novelty and efficiency, while also pointing to other COVID-19 reviews and surveys to deliver as complete an overview of existing COVID-19 research as possible.

    Interactive framework for Covid-19 detection and segmentation with feedback facility for dynamically improved accuracy and trust

    Due to the severity and speed of spread of the ongoing Covid-19 pandemic, fast but accurate diagnosis of Covid-19 patients has become a crucial task. Achievements in this respect might enlighten future efforts for the containment of other possible pandemics. Researchers from various fields have been trying to provide novel ideas for models or systems to identify Covid-19 patients from different medical and non-medical data. AI researchers have also contributed to this area, mostly by providing novel automated systems using convolutional neural networks (CNN) and deep neural networks (DNN) for Covid-19 detection and diagnosis. Owing to the efficiency of deep learning (DL) and transfer learning (TL) models in classification and segmentation tasks, most recent AI-based research has proposed various DL and TL models for Covid-19 detection and infected region segmentation from chest medical images such as X-rays or CT images. This paper describes a web-based application framework for Covid-19 lung infection detection and segmentation. The proposed framework is characterized by a feedback mechanism for self-learning and tuning. It uses variations of three popular DL models, namely Mask R-CNN, U-Net, and U-Net++. The models were trained, evaluated and tested using CT images of Covid patients collected from two different sources. The web application provides a simple, user-friendly interface for processing CT images from various sources using the chosen models, thresholds and other parameters to generate the detection and segmentation decisions. The models achieve high performance scores for Dice similarity, Jaccard similarity, accuracy, loss, and precision. The U-Net model outperformed the other models with more than 98% accuracy.
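    The Dice and Jaccard similarity scores used to evaluate the segmentation models above can be sketched on flat binary masks. The mask values below are invented for illustration.

```python
def dice(pred, truth):
    """Dice similarity: 2|A∩B| / (|A| + |B|) for binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    return 2 * inter / (sum(pred) + sum(truth))

def jaccard(pred, truth):
    """Jaccard similarity: |A∩B| / |A∪B| for binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union

pred  = [1, 1, 0, 1, 0]   # predicted infected pixels
truth = [1, 0, 0, 1, 1]   # ground-truth infected pixels
print(dice(pred, truth))     # 2*2 / (3+3) = 0.666...
print(jaccard(pred, truth))  # 2 / 4 = 0.5
```

    Dice weights the overlap more heavily than Jaccard, which is why segmentation papers often report both.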

    A comparative study of different pre-trained deep learning models and custom CNN for pancreatic tumor detection

    Artificial Intelligence and its sub-branches, such as Machine Learning (ML) and Deep Learning (DL), have the potential for applications that directly and positively affect human life. Medical imaging, briefly, makes the internal structure of the human body visible with various methods. With deep learning models, detection of cancer, one of the most lethal diseases in the world, can be made possible with high accuracy. Detection of pancreatic tumors, one of the cancer types with the highest fatality rate, is one of the main targets of this project, using a dataset of computed tomography images, a medical imaging technique with an effective structure for pancreatic cancer imaging. In image classification, a computer vision task, the transfer learning technique has gained popularity in recent years and has been applied quite frequently. Taking models pre-trained on a fairly large dataset and applying them to medical images is common nowadays. The main objective of this article is to use this method, which is very popular in the medical imaging field, for the detection of PDAC, one of the deadliest types of pancreatic cancer, and to investigate how it performs compared to a custom model created and trained from scratch. The pre-trained models used in this project for the pancreatic tumor detection task are VGG-16 and ResNet, both popular Convolutional Neural Network models. With these models, early diagnosis of pancreatic cancer, which progresses insidiously, may become possible before it spreads to neighboring tissues and organs, so that the treatment process can be started in time. Because the abundance of medical images to be reviewed by medical professionals is one of the main causes of the heavy workload of healthcare systems, this application can assist radiologists and other specialists in pancreatic tumor detection by providing a faster and more accurate method.
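    The transfer-learning setup described above, a pre-trained network reused as a frozen feature extractor with only a small classification head trained on the new data, can be sketched conceptually. Everything here is a toy stand-in: the "extractor", the data, and the logistic-regression head replace the paper's VGG-16/ResNet pipelines purely for illustration.

```python
import math

def frozen_extractor(x):
    """Stands in for pre-trained convolutional layers (never updated)."""
    return [x[0] + x[1], x[0] - x[1]]

def train_head(samples, labels, epochs=200, lr=0.5):
    """Train only a logistic-regression head on the frozen features."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            f = frozen_extractor(x)
            z = w[0] * f[0] + w[1] * f[1] + b
            z = max(min(z, 30.0), -30.0)      # clamp for numeric safety
            p = 1 / (1 + math.exp(-z))
            g = p - y                          # gradient of the log-loss
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(x, w, b):
    f = frozen_extractor(x)
    return int(w[0] * f[0] + w[1] * f[1] + b > 0)

# toy data: the "tumor" class (label 1) has larger summed intensity
data   = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
labels = [0, 0, 1, 1]
w, b = train_head(data, labels)
print([predict(x, w, b) for x in data])  # [0, 0, 1, 1]
```

    The design point is that only the head's few parameters are trained, which is why transfer learning works with the small datasets typical of medical imaging.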

    Emotion and Sentiment Analysis from Twitter Text

    Online social networks have emerged as a new platform that provides people an arena to share their views and perspectives on different issues and subjects with their friends, family, and other users. We can share our thoughts, mental states, moments and stances on specific social and political issues through texts, photos, audio/video messages and posts. Indeed, despite the availability of other forms of communication, text is still one of the most common means of communication in a social network. Twitter was chosen in this research for data collection, experimentation and analysis. The research described in this thesis detects and analyzes both the sentiment and the emotion expressed by people in the texts of their Twitter posts. Tweets and replies on a few recent topics were collected, and a dataset was created with text, user, emotion and sentiment information. The customized dataset held user details such as user ID, user name, screen name, location, and the numbers of tweets, followers, likes and followees. Similarly, for textual information, the tweet ID, tweet time, numbers of likes, replies and retweets, tweet text, reply text and a few other text-based attributes were collected. The texts of the dataset were then annotated with the proper emotions and sentiments according to benchmark models. The customized dataset was then used to detect sentiment and emotion from tweets and their replies using machine learning. The influence scores of users were also calculated based on various user-based and tweet-based parameters. Based on this information, both generalized and personalized recommendations were offered to users based on their Twitter activities.

    Emotion and sentiment analysis from Twitter text

    Online social networks have emerged as a new platform that provides an arena for people to share their views and perspectives on different issues and subjects with their friends, family, relatives, etc. We can share our thoughts, mental states, moments, and stances on specific social, national and international issues through text, photos, audio and video messages and posts. Indeed, despite the availability of other forms of communication, text is still one of the most common means of communication in a social network. The aim of the work described in this paper is to detect and analyze the sentiment and emotion expressed by people in the text of their Twitter posts and to use them for generating recommendations. We collected tweets and replies on a few specific topics and created a dataset with text, user, emotion and sentiment information. We used the dataset to detect sentiment and emotion from tweets and their replies, and measured the influence scores of users based on various user-based and tweet-based parameters. Finally, we used the latter information to generate generalized and personalized recommendations for users based on their Twitter activity. The method used in this paper includes some interesting novelties, such as (i) including replies to tweets in the dataset and measurements, (ii) introducing the agreement score, sentiment score and emotion score of replies into the influence score calculation, and (iii) generating general and personalized recommendations containing lists of users who agreed on the same topic and expressed similar emotions and sentiments towards that particular topic. (C) 2019 Elsevier B.V. All rights reserved.
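    The influence score described above combines user- and reply-based signals (agreement, sentiment and emotion scores) into one number. A hedged sketch of such a weighted combination follows; the weights, field names and user records are illustrative, not the paper's actual formula.

```python
def influence_score(user, w_followers=0.4, w_agree=0.3,
                    w_sentiment=0.2, w_emotion=0.1):
    """Weighted sum of normalized signals, each assumed to lie in [0, 1]."""
    return (w_followers * user["followers_norm"]
            + w_agree * user["agreement"]
            + w_sentiment * user["sentiment"]
            + w_emotion * user["emotion"])

users = [
    {"name": "u1", "followers_norm": 0.9, "agreement": 0.8,
     "sentiment": 0.7, "emotion": 0.6},
    {"name": "u2", "followers_norm": 0.2, "agreement": 0.9,
     "sentiment": 0.4, "emotion": 0.5},
]
ranked = sorted(users, key=influence_score, reverse=True)
print([u["name"] for u in ranked])  # u1 scores 0.80, u2 scores 0.48
```

    Ranking users by such a score is what makes the personalized recommendations possible: top-scoring users who agreed on a topic are suggested to like-minded readers.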

    Tweet and user validation with supervised feature ranking and rumor classification

    Filtering fake news from social network posts and detecting the social network users who are responsible for generating and propagating such rumors have become two major issues with the increased popularity of social networking platforms. As any user can post anything on social media and that post can instantly propagate across the world, it is important to recognize whether a post is a rumor or not. Twitter is one of the most popular social networking platforms used for news broadcasting, mostly as tweets and retweets. Hence, validating tweets and users based on their posts and behavior on Twitter has become a social, political and international issue. In this paper, we propose a method to classify rumor and non-rumor tweets by applying a novel tweet and user feature ranking approach with Decision Tree and Logistic Regression, applied to both tweet and user features extracted from the benchmark rumor dataset PHEME. The effect of the ranking model was then shown by classifying the dataset with the ranked features and comparing the results with the basic classifications using various combinations of features. Both supervised classification algorithms (namely, Support Vector Machine, Naive Bayes, Random Forest and Logistic Regression) and deep learning algorithms (namely, Convolutional Neural Network and Long Short-Term Memory) were used for rumor detection. The classification accuracy showed that the feature ranking classification results were comparable to the original classification performances. The ranking models were also used to list the topmost tweets and users under different conditions, and the results showed that even though the features were ranked differently by LR and RF, the topmost tweets and users for both rumors and non-rumors were the same.
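    The feature-ranking step can be illustrated with a minimal sketch: score each tweet/user feature by how strongly it separates rumor from non-rumor examples and keep the best ones. The class-mean gap used below is a deliberately simple stand-in for the paper's model-based (Decision Tree / Logistic Regression) ranking, and the feature names and values are invented.

```python
def rank_features(rows, labels, names):
    """Return feature names sorted by |mean(rumor) - mean(non-rumor)|.

    Note: this raw gap is scale-sensitive; a real ranking would
    normalize features or use model-derived importances instead.
    """
    scores = []
    for j, name in enumerate(names):
        rumor = [r[j] for r, y in zip(rows, labels) if y == 1]
        clean = [r[j] for r, y in zip(rows, labels) if y == 0]
        gap = abs(sum(rumor) / len(rumor) - sum(clean) / len(clean))
        scores.append((gap, name))
    return [name for _, name in sorted(scores, reverse=True)]

# toy features per tweet: [retweet_count, has_url, account_age]
rows   = [[120, 1, 0.1], [90, 1, 0.2], [5, 0, 0.9], [8, 0, 0.8]]
labels = [1, 1, 0, 0]   # 1 = rumor, 0 = non-rumor
print(rank_features(rows, labels, ["retweets", "has_url", "account_age"]))
# -> ['retweets', 'has_url', 'account_age']
```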

    A multi-modal emotion recognition system based on CNN-transformer deep learning technique

    Emotion analysis is a subject that researchers from various fields have been working on for a long time. Different emotion detection methods have been developed for the text, audio, photography, and video domains. Automated emotion detection using machine learning and deep learning models on videos and pictures has been an interesting topic for researchers. In this paper, a deep learning framework combining CNN and Transformer models, which classifies emotions using facial and body features extracted from videos, is proposed. Facial and body features were extracted using OpenPose, and in the data preprocessing stage two operations, new video creation and frame selection, were tried. The experiments were conducted on two datasets, FABO and CK+. Our framework outperformed similar deep learning models with 99% classification accuracy on the FABO dataset, and most versions of the framework showed remarkable performance, over 90% accuracy, on both the FABO and CK+ datasets.
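    A common form of the frame-selection preprocessing mentioned above is to pick k frames evenly spaced across a clip so the model sees the whole motion. This is a generic sketch under that assumption, not necessarily the exact selection rule used in the paper; the indices stand in for decoded video frames.

```python
def select_frames(num_frames, k):
    """Return k frame indices evenly spaced over [0, num_frames - 1]."""
    if k == 1:
        return [0]
    step = (num_frames - 1) / (k - 1)
    return [round(i * step) for i in range(k)]

print(select_frames(100, 5))  # [0, 25, 50, 74, 99]
```

    Fixing k also gives the CNN-Transformer a constant-length input sequence regardless of the original clip duration.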